This article investigates the economic consequences of data errors in the information flows associated with business processes. We develop a process modeling-based methodology for managing the risks associated with such data errors. Our method focuses on the topological structure of a process and accounts for its effect on error propagation and risk mitigation, using both expected loss and conditional value-at-risk as risk measures. Using this method, optimal strategies can be designed for allocating control resources to manage risk in a business process. Our work contributes to the literature on both ex ante risk management-based business process design and ex post risk assessment of existing business processes and control models. This research applies not only to the literature on and practice of process design and risk management but also to business decision support systems in general. An order-fulfillment process of an online pharmacy is used to illustrate the methodology.
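The two risk measures the abstract invokes can be illustrated with a short sketch. Everything here (the loss vector, the `alpha` level, the function names) is invented for illustration; the sketch only shows how expected loss and conditional value-at-risk (CVaR) summarize a distribution of process losses differently.

```python
import numpy as np

def expected_loss(losses):
    """Mean loss across simulated process outcomes."""
    return float(np.mean(losses))

def cvar(losses, alpha=0.8):
    """Conditional value-at-risk: mean of the worst (1 - alpha) fraction of losses."""
    losses = np.sort(np.asarray(losses, dtype=float))
    tail_start = int(np.ceil(alpha * len(losses)))
    return float(np.mean(losses[tail_start:]))

# Hypothetical losses from ten simulated runs of a process; note the heavy tail.
losses = np.array([0.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 50.0, 100.0])
expected = expected_loss(losses)       # overall average loss
tail_risk = cvar(losses, alpha=0.8)    # mean of the worst 20% of outcomes
```

Because CVaR looks only at the tail, a control that removes rare catastrophic errors can leave expected loss almost unchanged while sharply reducing CVaR, which is why optimal control-resource allocations can differ under the two measures.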
Collaborative filtering algorithms learn from the ratings of a group of users on a set of items to generate personalized recommendations for each user. Traditionally they have been designed to work with one-dimensional ratings. With interest growing in recommendations based on multiple aspects of items, we present an algorithm for using multicomponent rating data. The presented mixture model-based algorithm uses the component rating dependency structure discovered by a structure learning algorithm. The structure is supported by the psychometric literature on the halo effect. This algorithm is compared with a set of model-based and instance-based algorithms for single-component ratings and their variations for multicomponent ratings. We evaluate the algorithms using data from Yahoo! Movies. Use of multiple components leads to significant improvements in recommendations. However, we find that the choice of algorithm depends on the sparsity of the training data. It also depends on whether the task of the algorithm is to accurately predict ratings or to retrieve relevant items. In our experiments a model-based multicomponent rating algorithm is better able to retrieve items when training data are sparse. However, if the training data are not sparse, or if we are trying to predict the rating values accurately, then the instance-based multicomponent rating collaborative filtering algorithms perform better. Beyond generating recommendations, we show that the proposed model can fill in missing rating components. Theories in the psychometric literature and the empirical evidence suggest that rating specific aspects of a subject is difficult. Hence, filling in the missing component values opens the possibility of a rater support system to facilitate the gathering of multicomponent ratings.
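For readers unfamiliar with multicomponent ratings, a minimal instance-based sketch may help. This is not the paper's mixture model; the data, component names, and functions are all invented to show how neighbors' ratings over several components (e.g., story, acting, visuals, overall) can drive a prediction of a target user's overall rating.

```python
import numpy as np

# user -> item -> [story, acting, visuals, overall]; entirely hypothetical data.
ratings = {
    "u1": {"m1": [4, 5, 4, 4], "m2": [2, 1, 2, 2]},
    "u2": {"m1": [4, 4, 5, 4], "m2": [1, 2, 1, 1], "m3": [5, 5, 4, 5]},
    "u3": {"m1": [1, 2, 1, 1], "m3": [2, 2, 3, 2]},
}

def similarity(u, v):
    """Cosine similarity over all rating components of co-rated items."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    a = np.concatenate([ratings[u][m] for m in sorted(common)]).astype(float)
    b = np.concatenate([ratings[v][m] for m in sorted(common)]).astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_overall(user, item):
    """Similarity-weighted average of neighbors' overall ratings for the item."""
    neighbors = [(similarity(user, v), r[item][-1])
                 for v, r in ratings.items() if v != user and item in r]
    total = sum(s for s, _ in neighbors)
    return sum(s * o for s, o in neighbors) / total if total else None

pred = predict_overall("u1", "m3")   # a blend of u2's 5 and u3's 2
```

The multicomponent variants the paper compares differ in exactly this step: similarity and prediction use all components rather than only the overall rating.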
We analyze learning and knowledge transfer in a computing call center. The information technology (IT) technical services provided by call centers are characterized by constant changes in relevant knowledge and a wide variety of support requests. In this IT problem-solving context, we analyze the learning curve relationship between problem-solving experience and performance enhancement. Based on data collected from a university computing call center staffed by different types of consultants, our empirical findings indicate that (a) the learning effect, as measured by the reduction in average resolution time, occurs with experience; (b) knowledge transfer within a group occurs among lower-level consultants utilizing application-level knowledge (as opposed to technical-level knowledge); and (c) knowledge transfers across IT problem types. These estimates of learning and knowledge transfer contribute to the development of an empirically grounded understanding of IT knowledge workers' learning behavior. The results also have implications for operational decisions about the staffing and problem-solving strategies of call centers.
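The learning curve relationship between experience and resolution time is conventionally specified as a log-linear power law, t_n = a · n^(−b). A small sketch with invented parameter values shows how the learning rate b would be recovered by least squares in log-log space:

```python
import numpy as np

# Hypothetical resolution times (minutes) over successive problems handled by a
# consultant, generated noiselessly from the standard power-law specification.
experience = np.arange(1, 9)                 # cumulative problems solved
a_true, b_true = 30.0, 0.3
times = a_true * experience ** (-b_true)

# OLS in log-log space: log(t) = log(a) - b * log(n).
slope, intercept = np.polyfit(np.log(experience), np.log(times), 1)
b_hat = -slope          # estimated learning-curve elasticity
a_hat = np.exp(intercept)
```

With real call-center data the regression would include controls (consultant type, problem type) so that within-group and cross-type knowledge transfer can be estimated separately, as in findings (b) and (c).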
The article presents research on risk management related to computer network security in management information systems. A queuing model is presented to quantify the downtime loss a network faces during a security attack and to compare it to the costs of investment in computer security technologies, diversification of computer software to limit the risk of coordinated failure, and investment in information technology to repair failures. Situations under which the strategy of diversifying computer software is financially advantageous are discussed.
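The cost comparison described above can be illustrated with a deliberately stylized two-state availability model. This is an assumption for illustration only, not the article's queuing model: attacks arrive at rate `lam`, repairs complete at rate `mu`, and downtime costs `c` per hour.

```python
def expected_downtime_cost(lam, mu, c, horizon_hours):
    """Expected downtime loss over a horizon for an alternating up/down process."""
    downtime_fraction = lam / (lam + mu)   # long-run fraction of time the system is down
    return c * downtime_fraction * horizon_hours

# Compare baseline loss to a hypothetical security investment that halves the attack rate.
baseline = expected_downtime_cost(lam=0.02, mu=0.5, c=1000.0, horizon_hours=8760)
hardened = expected_downtime_cost(lam=0.01, mu=0.5, c=1000.0, horizon_hours=8760)
savings = baseline - hardened   # invest if savings exceed the technology's cost
```

The same comparison extends to the other two levers: software diversification lowers the effective correlated-attack rate, while IT repair investment raises `mu`.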
Each market session in a reverse electronic marketplace features a procurer and many suppliers. An important attribute of a market session chosen by the procurer is its information revelation policy. The revelation policy determines the information (such as the number of competitors, the winning bids, etc.) that will be revealed to participating suppliers at the conclusion of each market session. Suppliers participating in multiple market sessions bid strategically, misrepresenting their own cost structure, to exploit the information revealed at the end of each market session. This information helps to reduce two types of uncertainty encountered in future market sessions: their opponents' cost structures and the number of their competitors. Whereas the first type of uncertainty is present in both physical and e-marketplaces, the second arises naturally in IT-enabled marketplaces. Through their effect on the uncertainty faced by suppliers, information revelation policies influence the bidding behavior of suppliers, which, in turn, determines the expected price paid by the procurer. Therefore, the choice of information revelation policy has important consequences for the procurer. This paper develops a partially observable Markov decision process model of supplier bidding behavior and uses a multiagent e-marketplace simulation to analyze the effect that two commonly used information revelation policies (complete information and incomplete information) have on the expected price paid by the procurer. We find that the expected price under the complete information policy is lower than that under the incomplete information policy. The integration of ideas from the multiagent, machine-learning, and economics literatures to develop a method for evaluating information revelation policies in e-marketplaces is a novel feature of this paper.
A key aspect of better and more secure software is timely patch release by software vendors for the vulnerabilities in their products. Software vulnerability disclosure, which refers to the publication of vulnerability information, has generated intense debate. An important consideration in this debate is the behavior of software vendors. How quickly do vendors patch vulnerabilities, and how does disclosure affect patch release time? This paper compiles a unique data set from the Computer Emergency Response Team/Coordination Center (CERT) and SecurityFocus to answer these questions. Our results suggest that disclosure accelerates patch release: the instantaneous probability of releasing a patch rises by nearly two and a half times because of disclosure. Open source vendors release patches more quickly than closed source vendors, and vendors are more responsive to more severe vulnerabilities. We also find that vendors respond more slowly to vulnerabilities not disclosed by CERT. We verify our results using another publicly available data set and find that the results are consistent. We also show how our estimates can aid policy makers in their decision making.
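The headline result (disclosure multiplies the instantaneous patch-release probability by roughly 2.5) can be made concrete with an exponential-hazard sketch. The baseline hazard value here is invented; only the hazard ratio comes from the abstract.

```python
import math

base_hazard = 0.01        # hypothetical patch releases per day absent disclosure
hazard_ratio = 2.5        # approximate effect of disclosure reported in the abstract

# Under a constant hazard, expected time to patch is the reciprocal of the hazard.
expected_days_undisclosed = 1.0 / base_hazard                  # 100 days
expected_days_disclosed = 1.0 / (base_hazard * hazard_ratio)   # 40 days

def unpatched_prob(hazard, t):
    """Probability the vulnerability remains unpatched after t days."""
    return math.exp(-hazard * t)
```

In the paper's setting the hazard would be estimated from the CERT/SecurityFocus data with covariates such as severity and open- versus closed-source status, rather than assumed constant.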
The need to ensure reliability of data in information systems has long been recognized. However, recent accounting scandals and the subsequent requirements enacted in the Sarbanes-Oxley Act have made data reliability assessment of critical importance to organizations, particularly for accounting data. Using the accounting functions of management information systems as a context, this paper develops an interdisciplinary approach to data reliability assessment. Our work builds on the literature in accounting and auditing, where reliability assessment has been a topic of study for a number of years. While formal probabilistic approaches have been developed in this literature, they are rarely used in practice. The research reported in this paper attempts to strike a balance between the informal, heuristic-based approaches used by auditors and formal, probabilistic reliability assessment methods. We develop a formal, process-oriented ontology of an accounting information system that defines its components and semantic constraints. We use the ontology to specify data reliability assessment requirements and develop mathematical-model-based decision support methods to implement these requirements. We provide preliminary empirical evidence that the use of our approach improves the efficiency and effectiveness of reliability assessments. Finally, given the recent trend toward specifying information systems using executable business process models (e.g., the Business Process Execution Language), we discuss opportunities for integrating our process-oriented data reliability assessment approach, developed in the accounting context, into other IS application contexts.
We present a model to investigate the competitive implications for a supply chain of electronic secondary markets that promote concurrent selling of new and used goods. In secondary markets where suppliers cannot directly utilize used goods for practicing intertemporal price discrimination and where transaction costs of resales are negligible, the threat of cannibalization of new goods by used goods becomes significant. We examine conditions under which it is optimal for suppliers to operate in such markets, explaining why these markets may not always be detrimental for them. Intuitively, secondary markets provide an active outlet for some high-valuation consumers to sell their used goods. The potential for such resales increases consumers' valuation of a new good, leading them to buy an additional new good. Given sufficient heterogeneity in consumers' affinity across multiple suppliers' products, the "market expansion effect" accruing from consumers' cross-product purchase affinity can mitigate the losses suppliers incur from the direct "cannibalization effect." We also highlight the strategic role that the used goods commission set by the retailer plays in determining profits for suppliers. We conclude the paper by empirically testing some implications of our model using a unique data set from the online book industry, which has a flourishing secondary market.
Peer-to-peer (P2P) file sharing networks are an important medium for the distribution of information goods. However, there is little empirical research into the optimal design of these networks under real-world conditions. Early speculation about the behavior of P2P networks focused on the role that positive network externalities play in improving performance as the network grows. However, negative network externalities also arise in P2P networks, because of the consumption of scarce network resources or an increased propensity of users to free ride in larger networks, and the impact of these negative externalities, while potentially important, has received far less attention. Our research addresses this gap by measuring the impact of both positive and negative network externalities on the optimal size of P2P networks. We use a unique dataset collected from the six most popular OpenNap P2P networks between December 19, 2000, and April 22, 2001. We find that users contribute additional value to the network at a decreasing rate and impose costs on the network at an increasing rate as the network grows in size. Our results also suggest that users are less likely to contribute resources to the network as the network size increases. Together, these results suggest that the optimal size of these centralized P2P networks is bounded: at some point the costs that a marginal user imposes on the network exceed the value that user provides. This finding contrasts with early predictions that larger P2P networks would always provide more value to users than smaller ones. Finally, these results highlight the importance of considering user incentives, an important determinant of resource sharing in P2P networks, in network design.
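The bounded-optimal-size argument can be sketched numerically: with any concave value function and convex cost function of network size, net value peaks at a finite number of users. The specific functional forms and parameters below are assumptions for illustration, not estimates from the study.

```python
import numpy as np

def net_value(n, a=10.0, b=0.001):
    """Net value of a network with n users under assumed functional forms."""
    value = a * np.sqrt(n)   # concave benefit from positive externalities
    cost = b * n ** 2        # convex congestion / free-riding cost
    return value - cost

# Grid search over candidate network sizes for the interior optimum.
sizes = np.arange(1, 2001)
optimal_size = sizes[np.argmax(net_value(sizes))]
```

The optimum sits where the marginal cost a new user imposes first exceeds the marginal value that user contributes, which is exactly the crossover the abstract describes.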
Retrieving information from heterogeneous database systems involves a complex process and remains a challenging research area. We propose a cognitively guided approach for developing an information-retrieval agent that takes the user's information request, identifies relevant information sources, and generates a multidatabase access plan. Our work is distinctive in that the agent design is based on an empirical study of how human experts retrieve information from multiple, heterogeneous database systems. To improve on empirically observed information-retrieval capabilities, the design incorporates mathematical models and algorithmic components. These components optimize the set of information sources that need to be considered to respond to a user query and are used to develop efficient multidatabase-access plans. This agent design, which integrates cognitive and mathematical models, has been implemented using Soar, a knowledge-based architecture.
The value of mathematical modeling and analysis in the decision support context is well recognized. However, the complex and evolutionary nature of the modeling process has limited its widespread use. In this paper, we describe our work on knowledge-based tools that support the formulation and revision of mathematical programming models. In contrast to previous work on this topic, we base our work on an in-depth empirical investigation of experienced modelers and present three results: (a) a model of the modeling process of experienced modelers, derived using concurrent verbal protocol analysis; (b) an implementation of a modeling support system called MODFORM based on this observationally derived model; and (c) the results of a preliminary experiment indicating that users of MODFORM build models comparable to those formulated by experts. Our protocol analysis indicates that modeling is a synthetic process that relates specific features found in the problem to its mathematical model; these relationships, which are seldom articulated by modelers, are also used to revise models. We use the formulation of mathematical programming models of production planning problems illustratively throughout the paper.